Performance Analysis of Bit-Width Reduced Floating-Point Arithmetic Units in FPGAs: A Case Study of Neural Network-Based Face Detector

Authors

  • Yongsoon Lee
  • Younhee Choi
  • Moon Ho Lee
  • Seok-Bum Ko
Abstract

This paper implements a field-programmable gate array (FPGA)-based face detector using a neural network (NN) and bit-width reduced floating-point arithmetic units (FPUs). An analytical error model, based on the maximum relative representation error (MRRE) and the average relative representation error (ARRE), is developed to obtain the maximum and average output errors of the bit-width reduced FPUs. After the analytical error model is developed, the bit-width reduced FPUs and an NN are designed using MATLAB and VHDL. Finally, the analytical (MATLAB) results are compared with the experimental (VHDL) results and show good agreement in shape. We demonstrate that incremental reductions in the number of bits used can produce significant cost reductions, including area, delay, and power.
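For illustration, the following Python sketch (an illustrative assumption of this page, not the paper's MATLAB/VHDL implementation) emulates a radix-2 FPU whose mantissa is reduced to m bits and compares the measured relative representation errors against the commonly used bounds MRRE = 2^(-m) and ARRE = 2^(-m)/(4 ln 2), which assume round-to-nearest and a logarithmic mantissa distribution; the function names and sampling range are made up for the example.

import numpy as np

def quantize_mantissa(x, m):
    # Emulate a radix-2 FPU whose mantissa is reduced to m bits (round-to-nearest).
    mant, exp = np.frexp(x)                 # x = mant * 2**exp with |mant| in [0.5, 1)
    return np.ldexp(np.round(mant * 2.0**m) / 2.0**m, exp)

def representation_error(m, n=1_000_000, seed=0):
    # Measure the maximum and average relative representation error of an m-bit mantissa.
    rng = np.random.default_rng(seed)
    x = 2.0 ** rng.uniform(-8.0, 8.0, n)    # log-uniform samples give a logarithmic mantissa distribution
    rel = np.abs(quantize_mantissa(x, m) - x) / x
    return rel.max(), rel.mean()

if __name__ == "__main__":
    for m in (8, 12, 16, 23):               # 23 mantissa bits corresponds to IEEE 754 single precision
        mrre = 2.0 ** (-m)                  # maximum relative representation error
        arre = 2.0 ** (-m) / (4.0 * np.log(2.0))  # average relative representation error
        emax, emean = representation_error(m)
        print(f"m={m:2d}  MRRE={mrre:.2e} (measured {emax:.2e})  ARRE={arre:.2e} (measured {emean:.2e})")

The measured maximum stays at or below MRRE while the measured mean tracks ARRE, mirroring how analytical bounds of this kind can be checked against simulation.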


Similar articles

An FPGA-Based Face Detector Using Neural Network and a Scalable Floating Point Unit

This study implemented an FPGA-based face detector using a neural network and a scalable floating-point arithmetic unit (FPU). The FPU provides dynamic range and reduces the bit width of the arithmetic unit more than the fixed-point method does (a brief sketch of this contrast follows the full-text link below). These features led to a reduction in memory, making the approach efficient for neural network systems with large data word sizes. The arithmetic unit occupies 39~45% of ...

Full text
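As a side note, a minimal Python sketch (the Q8.8 format and the sample values are assumptions for illustration, not taken from the cited study) shows why a 16-bit floating-point representation keeps dynamic range that a 16-bit fixed-point format loses:

import numpy as np

def to_q8_8(x):
    # Quantize to signed 16-bit Q8.8 fixed point (8 integer bits, 8 fraction bits), with saturation.
    return np.clip(np.round(x * 256.0), -32768, 32767) / 256.0

weights = np.array([3.0e2, 1.5, 2.0e-2, 3.0e-4])        # hypothetical NN weights spanning several decades
print("Q8.8 fixed point:", to_q8_8(weights))             # 300 saturates near 128 and 3e-4 collapses to 0
print("16-bit float    :", weights.astype(np.float16))   # every magnitude survives, at roughly 3 decimal digits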

Feasibility of Floating-Point Arithmetic in FPGA-Based Artificial Neural Networks

Artificial Neural Networks (ANNs) implemented on Field-Programmable Gate Arrays (FPGAs) have traditionally used a minimum allowable precision of 16-bit fixed point. This approach is considered an optimal precision-versus-area tradeoff for FPGA-based ANNs because quality of performance is maintained while efficient use is made of the limited hardware resources available in an FPGA. However, li...

Full text

Using Floating-Point Arithmetic on FPGAs to Accelerate Scientific N-Body Simulations

This paper investigates the use of floating-point arithmetic on FPGAs for N-body simulation in the natural sciences. The common aspect of these applications is a simple computing structure in which the forces between a particle and its surrounding particles are summed up. The role of reduced-precision arithmetic is discussed, along with our implementation of a floating-point arithmetic library with parameteri...

Full text

Chapter 2: On the Arithmetic Precision for Implementing Back-Propagation Networks on FPGA: A Case Study

Artificial Neural Networks (ANNs) are inherently parallel architectures and represent a natural fit for custom implementation on FPGAs. One important implementation issue is determining the numerical precision format that allows an optimum tradeoff between precision and implementation area. Standard single- or double-precision floating-point representations minimize quantization errors while...

Full text

Experimental Demonstration of the Fixed-Point Sparse Coding Performance

The Sparse Coding (SC) model has proven to be among the best neural network models for unsupervised feature learning in many applications. Running a sparse coding algorithm is a time-consuming task due to its large scale and processing characteristics, which naturally leads to investigating FPGA acceleration. Fixed-point arithmetic can be used when implementing SC in FPGAs ...

Full text


Journal:
  • EURASIP J. Emb. Sys.

Volume: 2009  Issue: 

Pages: -

Publication date: 2009